
    Proceedings of the First Workshop on Requirements Prioritisation and Enactment, PrioRE’17: In conjunction with REFSQ 2017: Preface

    It is our pleasure to welcome the reader to the proceedings of the First International Workshop on Requirements Prioritization and Enactment (PrioRE’17), co-located with the 23rd International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2017), held in Essen, Germany. First in its series, PrioRE’17 aims to bring research experts and practitioners together and provide them with a platform to exchange their visions on requirements prioritization, release planning, and their enactment. The main focus is on how requirements can best be prioritized, and how releases are planned and enacted.

    Automated Test Case Generation as a Many-Objective Optimisation Problem with Dynamic Selection of the Targets

    Test case generation is intrinsically a multi-objective problem, since the goal is covering multiple test targets (e.g., branches). Existing search-based approaches either consider one target at a time or aggregate all targets into a single fitness function (the whole-suite approach). Multi- and many-objective optimisation algorithms (MOAs) have never been applied to this problem, because existing algorithms do not scale to the number of coverage objectives typically found in real-world software. In addition, the final goal of MOAs is to find alternative trade-off solutions in the objective space, while in test generation the interesting solutions are only those test cases covering one or more uncovered targets. In this paper, we present DynaMOSA (Dynamic Many-Objective Sorting Algorithm), a novel many-objective solver specifically designed to address the test case generation problem in the context of coverage testing. DynaMOSA extends our previous many-objective technique MOSA (Many-Objective Sorting Algorithm) with dynamic selection of the coverage targets based on the control dependency hierarchy. This extension makes the approach more effective and efficient when the search budget is limited. We carried out an empirical study on 346 Java classes using three coverage criteria (statement, branch, and strong mutation coverage) to assess the performance of DynaMOSA with respect to the whole-suite approach (WS), its archive-based variant (WSA), and MOSA. The results show that DynaMOSA outperforms WSA in 28% of the classes for branch coverage (+8% more coverage on average) and in 27% of the classes for mutation coverage (+11% more killed mutants on average). It outperforms WS in 51% of the classes for statement coverage, leading to +11% more coverage on average. Moreover, DynaMOSA outperforms its predecessor MOSA for all three coverage criteria in 19% of the classes, with +8% more code coverage on average.
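    The core idea of dynamic target selection can be illustrated with a minimal sketch (this is an illustrative toy, not the DynaMOSA implementation): a coverage target becomes an active search objective only once the target it is control-dependent on has been covered, and covered targets are retired from the objective set.

    ```python
    # Illustrative sketch (not DynaMOSA itself): targets become active
    # objectives only when their control-dependency parent is covered;
    # covered targets are retired from the active set.

    def update_active_targets(active, covered, dependency):
        """dependency maps each target to the target it is
        control-dependent on (None for entry-level targets)."""
        active = active - {t for t in active if t in covered}
        # activate children whose control-dependency parent is now covered
        for child, parent in dependency.items():
            if child not in covered and (parent is None or parent in covered):
                active.add(child)
        return active

    deps = {"b1": None, "b2": "b1", "b3": "b1", "b4": "b2"}
    active = update_active_targets(set(), set(), deps)    # only b1 is active
    active = update_active_targets(active, {"b1"}, deps)  # b2 and b3 activate
    ```

    Keeping only the currently reachable targets as objectives is what lets a many-objective algorithm scale to the thousands of coverage targets found in real classes.
    
    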

    Managing Multi-Lingual User Feedback: The SUPERSEDE Project Experience

    [Context & Motivation] In the SUPERSEDE project, methods and tools have been developed to collect and analyze user feedback, and to identify relevant information for deciding which are the most important requirements to be considered for the next release of a product. [Question/problem] Even if the project proposal was to analyze feedback in the English language only, it later emerged that there was a need to analyze multi-lingual (German, English) feedback. [Principal ideas] We considered two different solutions: 1) translating user feedback from German to English and processing it with the techniques developed for the English language; 2) exploiting Natural Language Processing (NLP) techniques for German to analyze the feedback directly in German. [Contribution] In this short report we describe this project experience, summarizing the main commonalities and differences between the aforementioned solutions.
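    The two solutions compared in the report amount to a routing decision in the analysis pipeline. A minimal sketch of that choice (all function names below are hypothetical stand-ins, not the SUPERSEDE tooling):

    ```python
    # Hypothetical stand-ins for the two multi-lingual strategies:
    # "translate" routes German feedback through MT into the English
    # pipeline; "native" analyzes German feedback with German NLP.

    def translate_de_to_en(text):
        return "<EN translation of: %s>" % text   # stand-in for an MT step

    def english_pipeline(text):
        return {"pipeline": "en", "text": text}   # stand-in for English NLP

    def german_pipeline(text):
        return {"pipeline": "de", "text": text}   # stand-in for German NLP

    def analyze(text, lang, strategy):
        if lang == "de" and strategy == "translate":
            return english_pipeline(translate_de_to_en(text))
        if lang == "de" and strategy == "native":
            return german_pipeline(text)
        return english_pipeline(text)
    ```

    The trade-off the report discusses lives entirely in that branch: translation reuses a mature English pipeline at the cost of MT errors, while the native route requires German-specific NLP resources.
    
    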

    Java unit testing tool competition

    We report on the results of the eighth edition of the Java unit testing tool competition. This year, two tools, EvoSuite and Randoop, were executed on a benchmark with (i) new classes under test, selected from open-source software projects, and (ii) the set of classes from one project considered in the previous edition. We relied on an updated infrastructure, based on Docker containers, for the execution of the different tools and the subsequent coverage and mutation analysis. We considered two different time budgets for test case generation: one and three minutes. This paper describes our methodology and statistical analysis of the results, presents the results achieved by the contestant tools, and highlights the challenges we faced during the competition.

    JUGE: An Infrastructure for Benchmarking Java Unit Test Generators

    Researchers and practitioners have designed and implemented various automated test case generators to support effective software testing. Such generators exist for various languages (e.g., Java, C#, or Python) and for various platforms (e.g., desktop, web, or mobile applications), and exhibit varying effectiveness and efficiency depending on the testing goals they aim to satisfy (e.g., unit testing of libraries vs. system testing of entire applications) and the underlying techniques they implement. In this context, practitioners need to be able to compare different generators to identify the one best suited to their requirements, while researchers seek to identify future research directions. This can be achieved through the systematic execution of large-scale evaluations of different generators. However, executing such empirical evaluations is not trivial and requires substantial effort to collect benchmarks, set up the evaluation infrastructure, and collect and analyse the results. In this paper, we present our JUnit Generation benchmarking infrastructure (JUGE), which supports generators (e.g., search-based, random-based, or symbolic-execution-based) seeking to automate the production of unit tests for various purposes (e.g., validation, regression testing, or fault localization). The primary goal is to reduce the overall effort, ease the comparison of several generators, and enhance knowledge transfer between academia and industry by standardizing the evaluation and comparison process. Since 2013, eight editions of a unit testing tool competition, co-located with the Search-Based Software Testing Workshop, have taken place, using and updating JUGE. As a result, an increasing number of tools (over ten) from both academia and industry have been evaluated with JUGE, matured over the years, and allowed the identification of future research directions.

    Model-based Player Experience Testing with Emotion Pattern Verification

    Player eXperience (PX) testing has attracted attention in the game industry as video games become more complex and widespread. Understanding players’ desires and their experience is a key element in guaranteeing the success of a game in a highly competitive market. Although a number of techniques have been introduced to measure the emotional aspect of the experience, automated testing of player experience still needs to be explored. This paper presents a framework for automated player experience testing by formulating emotion-pattern requirements and utilizing a computational model of players’ emotions, developed based on a psychological theory of emotions, along with a model-based testing approach for test suite generation. We evaluate the strength of our framework by performing mutation testing. The paper also compares the performance of a search-based generated test suite and an LTL model-checking-based test suite in revealing various temporal and spatial emotion patterns. Results show that both algorithms contribute complementary test cases for revealing various emotions in different locations of a game level.
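    A temporal emotion-pattern check of the kind described can be sketched very simply over a recorded play trace (the emotions and the pattern below are hypothetical examples, not the paper's formalisation):

    ```python
    # Hedged illustration: verify an "eventually B after A" temporal
    # pattern over a simulated trace of per-step emotion labels.
    # The labels and the pattern are hypothetical examples.

    def eventually_after(trace, first, then):
        """True if an occurrence of `then` follows some occurrence of `first`."""
        seen_first = False
        for emotion in trace:
            if emotion == first:
                seen_first = True
            elif seen_first and emotion == then:
                return True
        return False

    trace = ["joy", "fear", "fear", "hope", "joy"]  # one simulated play-through
    satisfied = eventually_after(trace, "fear", "hope")
    ```

    A model checker generalises exactly this kind of check: an LTL formula such as G(fear → F hope) is evaluated over all traces of the game model rather than a single recorded one.
    
    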

    Evolutionary Test Case Generation via Many Objective Optimization and Stochastic Grammars

    In search-based test case generation, most research has focused on the single-objective formulation of the test case generation problem. However, there is a wide variety of multi- and many-objective optimization strategies that could offer advantages not yet investigated for test case generation. Furthermore, existing techniques and available tools mainly handle test generation for programs with primitive inputs, such as numeric or string input, and often do not scale effectively to large sizes and complex inputs. In this thesis, at the unit level, branch coverage is reformulated as a many-objective optimization problem, as opposed to the state-of-the-art single-objective formulation, and a novel algorithm is proposed for the generation of branch-adequate test cases. At the system level, the thesis proposes a test generation approach that combines stochastic grammars with genetic programming for the generation of branch-adequate test cases. The combination of stochastic grammars and genetic programming is also investigated in the context of field failure reproduction for programs with highly structured input.
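    The stochastic-grammar side of this approach can be illustrated with a toy probabilistic grammar (the grammar and weights below are invented for illustration, not taken from the thesis): each nonterminal maps to weighted production alternatives, and sampling expands nonterminals recursively into syntactically valid, structured inputs.

    ```python
    import random

    # Toy stochastic grammar for arithmetic expressions: each nonterminal
    # maps to (production, weight) alternatives; the first alternative of
    # each nonterminal is non-recursive so deep recursion can be cut off.
    GRAMMAR = {
        "<expr>": [(["<num>"], 0.6), (["(", "<expr>", "+", "<expr>", ")"], 0.4)],
        "<num>": [(["0"], 0.5), (["1"], 0.5)],
    }

    def sample(symbol, rng, depth=0, max_depth=8):
        if symbol not in GRAMMAR:            # terminal symbol: emit as-is
            return symbol
        alts = GRAMMAR[symbol]
        if depth >= max_depth:               # force termination by keeping
            alts = alts[:1]                  # only the non-recursive alternative
        productions = [seq for seq, _ in alts]
        weights = [w for _, w in alts]
        chosen = rng.choices(productions, weights=weights)[0]
        return "".join(sample(s, rng, depth + 1, max_depth) for s in chosen)

    rng = random.Random(0)
    inputs = [sample("<expr>", rng) for _ in range(5)]  # e.g. "0", "(1+0)", ...
    ```

    Genetic programming then operates on the derivation trees produced by such a sampler, mutating and recombining subtrees while the grammar guarantees that every offspring remains a well-formed input.
    
    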

    A Search-Based Framework for Failure Reproduction

    The task of debugging software failures is generally time consuming and involves substantial manual effort. A crucial part of this task lies in reproducing the reported failure at the developer’s site. In this paper, we propose a novel framework that addresses the problem of failure reproduction by employing an adaptive search-based approach in combination with a limited amount of instrumentation. In particular, we formulate failure reproduction as a search problem: reproducing a software failure can be viewed as the search for a set of inputs that leads execution along the failing path. The search is guided by information obtained through instrumentation. Preliminary experiments on small programs show promising results, with the proposed approach outperforming random search.

    Message from the Program Co-Chairs

    Welcome to the proceedings of the 15th International Conference on Software Testing, Verification and Validation (ICST 2022). The conference aims to provide a common forum for researchers, scientists, engineers, and practitioners throughout the world to present their latest research findings, ideas, developments, and applications in the area of Software Testing, Verification, and Validation.